
    Exact Lagrangian submanifolds in simply-connected cotangent bundles

    We consider exact Lagrangian submanifolds in cotangent bundles. Under certain additional restrictions (triviality of the fundamental group of the cotangent bundle, and of the Maslov class and second Stiefel-Whitney class of the Lagrangian submanifold), we prove such submanifolds are Floer-cohomologically indistinguishable from the zero-section. This implies strong restrictions on their topology. An essentially equivalent result was recently proved independently by Nadler, using a different approach. Comment: 28 pages, 3 figures. Version 2 -- derivation and discussion of the spectral sequence considerably expanded. Other minor changes.
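
    In symbols, with N the base manifold and L ⊂ T*N the exact Lagrangian submanifold (our notation, not a quotation of the paper's theorem), the hypotheses read

        \[
        \pi_1(T^*N) = 0, \qquad \mu_L = 0 \in H^1(L;\mathbb{Z}), \qquad w_2(L) = 0 \in H^2(L;\mathbb{Z}/2),
        \]

    and "Floer-cohomologically indistinguishable from the zero-section" can be read, roughly, as saying that the Floer cohomology of L with a cotangent fibre has rank one, exactly as for the zero-section N itself; in particular H*(L) ≅ H*(N), which is the source of the topological restrictions mentioned above.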

    XNect: Real-time Multi-person 3D Human Pose Estimation with a Single RGB Camera

    We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates in generic scenes and is robust to difficult occlusions both by other people and objects. Our method operates in three successive stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully-connected neural network turns the possibly partial (on account of occlusion) 2D pose and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose, and to enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work, which did not extract global body positions or joint angles of a coherent skeleton in real time for multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input, while achieving state-of-the-art accuracy, which we demonstrate on a range of challenging real-world scenes.
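
    The three-stage structure described above can be sketched as follows; every name, shape, and the dummy arithmetic are illustrative placeholders, and neither the SelecSLS backbone nor the real lifting and skeletal-fitting components are reproduced here:

        import numpy as np

        N_JOINTS = 21

        def stage1_cnn(frame):
            # Stand-in for the per-frame CNN (the paper's SelecSLS Net): per person,
            # a possibly partial 2D pose (occluded joints missing), 3D pose features
            # and an identity label.  Random numbers replace the real network output.
            rng = np.random.default_rng(0)
            people = []
            for pid in range(2):                          # pretend two people are visible
                visible = rng.random(N_JOINTS) > 0.2      # occluded joints are missing
                people.append({
                    "id": pid,
                    "pose2d": np.where(visible[:, None], rng.random((N_JOINTS, 2)), np.nan),
                    "feat3d": rng.random((N_JOINTS, 3)),
                })
            return people

        def stage2_lifting(person):
            # Stand-in for the fully-connected lifting network that turns the partial
            # 2D/3D features into a complete 3D pose for one person.
            pose2d = np.nan_to_num(person["pose2d"])      # crude fill of occluded joints
            return np.concatenate([pose2d, person["feat3d"][:, 2:]], axis=1)   # (J, 3)

        def stage3_model_fit(poses_per_frame):
            # Stand-in for space-time skeletal fitting; here just a temporal average,
            # standing in for the fit that reconciles 2D/3D and enforces coherence.
            return np.mean(poses_per_frame, axis=0)

        frames = [np.zeros((320, 512, 3), dtype=np.uint8) for _ in range(3)]
        per_frame = [np.stack([stage2_lifting(p) for p in stage1_cnn(f)]) for f in frames]
        smoothed = stage3_model_fit(per_frame)
        print(smoothed.shape)                             # (num_people, N_JOINTS, 3)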

    Practical Saccade Prediction for Head-Mounted Displays: Towards a Comprehensive Model

    Eye-tracking technology is an integral component of new display devices such as virtual and augmented reality headsets. Applications of gaze information range from new interaction techniques exploiting eye patterns to gaze-contingent digital content creation. However, system latency is still a significant issue in many of these applications because it breaks the synchronization between the current and measured gaze positions. Consequently, it may lead to unwanted visual artifacts and degradation of user experience. In this work, we focus on foveated rendering applications where the quality of an image is reduced towards the periphery for computational savings. In foveated rendering, the presence of latency leads to delayed updates to the rendered frame, making the quality degradation visible to the user. To address this issue and to combat system latency, recent work proposes to use saccade landing position prediction to extrapolate the gaze information from delayed eye-tracking samples. While the benefits of such a strategy have already been demonstrated, the solutions range from simple and efficient ones, which make several assumptions about the saccadic eye movements, to more complex and costly ones, which use machine learning techniques. Yet, it is unclear to what extent the prediction can benefit from accounting for additional factors. This paper presents a series of experiments investigating the importance of different factors for saccade prediction in common virtual and augmented reality applications. In particular, we investigate the effects of saccade orientation in 3D space and smooth pursuit eye motion (SPEM) and how their influence compares to the variability across users. We also present a simple yet efficient correction method that adapts the existing saccade prediction methods to handle these factors without performing extensive data collection.
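
    As a concrete example of the "simple and efficient" end of the spectrum mentioned above, the toy predictor below detects a saccade with a velocity threshold and linearly extrapolates the delayed gaze sample over a known system latency. The threshold, the latency value and the linear model are illustrative assumptions, not the paper's correction method:

        import numpy as np

        SACCADE_VELOCITY_THRESHOLD = 30.0   # deg/s; a common ballpark onset threshold
        SYSTEM_LATENCY = 0.050              # assumed end-to-end latency to compensate, in s

        def predict_gaze(samples, timestamps):
            # samples: (N, 2) gaze positions in degrees; timestamps: (N,) seconds.
            # Returns the gaze position extrapolated SYSTEM_LATENCY ahead of the
            # last (delayed) sample.
            velocity = (samples[-1] - samples[-2]) / (timestamps[-1] - timestamps[-2])
            if np.linalg.norm(velocity) < SACCADE_VELOCITY_THRESHOLD:
                return samples[-1]                       # fixation / pursuit: no extrapolation
            return samples[-1] + velocity * SYSTEM_LATENCY

        t = np.array([0.000, 0.008, 0.016])                     # 125 Hz tracker samples
        gaze = np.array([[0.0, 0.0], [0.5, 0.0], [2.5, 0.1]])   # degrees; a saccade starts
        print(predict_gaze(gaze, t))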

    Transformation-aware Perceptual Image Metric

    Predicting human visual perception has several applications such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations: for example, we perceive an image and a rotated copy of it as identical. Instead, they report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images gets increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field, and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of complexity in a flow field. This representation is then used for applications such as comparison of non-aligned images, where transformations cause threshold elevation, detection of salient transformations, and a model of perceived motion parallax. Applications of our approach are a perceptual level-of-detail for real-time rendering and viewpoint selection based on perceived motion parallax.
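
    The first step of the pipeline, fitting a parametric transform to a local patch of the optical-flow field and reading off elementary components, can be illustrated in a reduced form. The sketch below fits only a least-squares similarity transform (translation, rotation, uniform scale) to synthetic correspondences, whereas the paper fits full local homographies and also extracts perspective:

        import numpy as np

        def fit_similarity(points_src, points_dst):
            # Least-squares similarity transform: dst ~ scale * rot @ src + trans.
            mu_s, mu_d = points_src.mean(0), points_dst.mean(0)
            src, dst = points_src - mu_s, points_dst - mu_d
            u, sing, vt = np.linalg.svd(dst.T @ src)
            rot = u @ vt
            scale = sing.sum() / (src ** 2).sum()
            trans = mu_d - scale * rot @ mu_s
            return scale, rot, trans

        # Synthetic local flow patch: points rotated by 10 degrees, scaled by 1.05,
        # shifted by (3, -1).
        theta = np.deg2rad(10.0)
        true_rot = np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
        grid = np.stack(np.meshgrid(np.arange(8.0), np.arange(8.0)), -1).reshape(-1, 2)
        moved = 1.05 * grid @ true_rot.T + np.array([3.0, -1.0])

        scale, rot, trans = fit_similarity(grid, moved)
        angle = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
        print(f"scale={scale:.3f}  rotation={angle:.1f} deg  translation={trans}")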

    A custom designed density estimation method for light transport

    We present a new Monte Carlo method for solving the global illumination problem in environments with general descriptions of geometry, light emission, and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points (light and potential particle hit points) from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bi-directional path tracing with the efficiency of algorithms such as photon mapping.
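
    For context, the generic density estimation that the abstract contrasts the new estimator with can be as simple as a k-nearest-neighbour, photon-mapping-style flux estimate at a query point. The sketch below shows only that generic baseline (the paper's custom estimator is not reproduced here), with constants chosen purely for illustration:

        import numpy as np

        def knn_density(hit_points, powers, query, k=20):
            # Estimate flux density (power per unit area) at `query` from the k
            # nearest particle hit points on a surface (2D local parameterisation).
            dists = np.linalg.norm(hit_points - query, axis=1)
            nearest = np.argsort(dists)[:k]
            radius = dists[nearest].max()
            return powers[nearest].sum() / (np.pi * radius ** 2)

        rng = np.random.default_rng(1)
        hits = rng.random((5000, 2))             # particle hit points on a unit square
        power = np.full(5000, 1.0 / 5000)        # each particle carries equal power
        print(knn_density(hits, power, np.array([0.5, 0.5])))   # ~1.0 expected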

    Motion Parallax in Stereo 3D: Model and Applications

    Binocular disparity is the main depth cue that makes stereoscopic images appear 3D. However, in many scenarios, the range of depth that can be reproduced by this cue is greatly limited and typically fixed due to constraints imposed by displays. For example, due to the low angular resolution of current automultiscopic screens, they can only reproduce a shallow depth range. In this work, we study the motion parallax cue, which is a relatively strong depth cue and can be freely reproduced even on a 2D screen, without such limits. We exploit the fact that in many practical scenarios, motion parallax provides sufficiently strong depth information that the presence of binocular depth cues can be reduced through aggressive disparity compression. To assess the strength of the effect, we conduct psycho-visual experiments that measure the influence of motion parallax on depth perception and relate it to the depth resulting from binocular disparity. Based on the measurements, we propose a joint disparity-parallax computational model that predicts apparent depth resulting from both cues. We demonstrate how this model can be applied in the context of stereo and multiscopic image processing, and propose new disparity manipulation techniques, which first quantify depth obtained from motion parallax, and then adjust binocular disparity information accordingly. This allows us to manipulate the disparity signal according to the strength of motion parallax to improve the overall depth reproduction. This technique is validated in additional experiments.
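
    The geometric background for relating the two cues is the standard small-angle approximation (textbook geometry, not the paper's measured joint model): for an observer translating laterally with speed v while fixating a point at distance z, a second point offset in depth by Δz ≪ z moves with relative angular velocity of magnitude roughly

        \[
        \dot{\theta} \;\approx\; v\left(\frac{1}{z} - \frac{1}{z+\Delta z}\right) \;\approx\; \frac{v\,\Delta z}{z^{2}},
        \]

    while the binocular disparity between the same two points is approximately

        \[
        \delta \;\approx\; \frac{I\,\Delta z}{z^{2}},
        \]

    with I the interocular distance; the disparity thus equals the parallax produced by a head translation of one interocular distance, which is what makes it natural to trade one cue against the other once their perceptual weights have been measured.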

    Event Horizons in Numerical Relativity I: Methods and Tests

    This is the first paper in a series on event horizons in numerical relativity. In this paper we present methods for obtaining the location of an event horizon in a numerically generated spacetime. The location of an event horizon is determined based on two key ideas: (1) integrating backward in time, and (2) integrating the whole horizon surface. The accuracy and efficiency of the methods are examined with various sample spacetimes, including both analytic (Schwarzschild and Kerr) and numerically generated black holes. The numerically evolved spacetimes contain highly distorted black holes, rotating black holes, and colliding black holes. In all cases studied, our methods can find event horizons to within a very small fraction of a grid zone. Comment: 22 pages, LaTeX with RevTeX 3.0 macros, 20 uuencoded gz-compressed postscript figures. Also available at http://jean-luc.ncsa.uiuc.edu/Papers/. Submitted to Physical Review.
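
    The effect of key idea (1) can be seen already in a toy setting: for radial outgoing light rays in the Schwarzschild geometry (an analytic test metric, not a numerically generated spacetime) the coordinate equation of motion is dr/dt = 1 - 2M/r, so rays started near r = 2M diverge from the horizon when integrated forward in time but converge onto it when integrated backward. A minimal numerical sketch of that convergence, with all values chosen purely for illustration:

        import numpy as np
        from scipy.integrate import solve_ivp

        M = 1.0                                  # black-hole mass (geometric units)

        def outgoing_ray(t, r):
            # Radial outgoing null rays in Schwarzschild coordinates.
            return 1.0 - 2.0 * M / r

        # Start rays at t = 100 from a spread of radii and integrate back to t = 0.
        for r_start in [1.5, 1.9, 2.1, 3.0, 5.0]:
            sol = solve_ivp(outgoing_ray, [100.0, 0.0], [r_start], rtol=1e-9, atol=1e-12)
            print(f"r(t=100) = {r_start:4.1f}  ->  r(t=0) = {sol.y[0, -1]:.6f}")
        # Every ray ends up at r ~ 2M = 2.0: the event horizon.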

    Influence of Josephson current second harmonic on stability of magnetic flux in long junctions

    We study the long Josephson junction (LJJ) model which takes into account the second harmonic of the Fourier expansion of the Josephson current. The dependence of the static magnetic flux distributions on the parameters of the model is investigated numerically. Stability of the static solutions is checked via the sign of the smallest eigenvalue of the associated Sturm-Liouville problem. New solutions, which do not exist in the traditional model, have been found. We also investigate the influence of the second harmonic on the stability of the magnetic flux distributions for the main solutions. Comment: 4 pages, 6 figures, to be published in Proc. of Dubna-Nano2010, July 5-10, 2010, Russia.
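
    In the notation commonly used for this class of models (our choice of symbols and normalisation, which may differ from the paper's), the static phase distribution φ(x) in a junction of length l solves the double sine-Gordon boundary-value problem

        \[
        -\varphi'' + \sin\varphi + a\,\sin 2\varphi = \gamma, \qquad \varphi'(\pm l/2) = h_B,
        \]

    where a is the second-harmonic amplitude, γ the bias current and h_B the boundary magnetic field, and stability of a given solution is decided by the sign of the smallest eigenvalue λ_min of the linearised Sturm-Liouville problem

        \[
        -\psi'' + \left(\cos\varphi + 2a\,\cos 2\varphi\right)\psi = \lambda\,\psi, \qquad \psi'(\pm l/2) = 0,
        \]

    with λ_min > 0 indicating a stable magnetic flux distribution.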